
    A human genome-wide library of local phylogeny predictions for whole-genome inference problems

    Get PDF
    Abstract

    Background: Many common inference problems in computational genetics depend on inferring aspects of the evolutionary history of a data set given a set of observed modern sequences. Detailed predictions of the full phylogenies are therefore of value in improving our ability to make further inferences about population history and sources of genetic variation. Making phylogenetic predictions on the scale needed for whole-genome analysis is, however, extremely computationally demanding.

    Results: In order to facilitate phylogeny-based predictions on a genomic scale, we develop a library of maximum parsimony phylogenies within local regions spanning all autosomal human chromosomes, based on Haplotype Map variation data. We demonstrate the utility of this library for population genetic inferences by examining a tree statistic we call 'imperfection,' which measures the reuse of variant sites within a phylogeny. This statistic is significantly predictive of recombination rate, shows additional regional and population-specific conservation, and allows us to identify outlier genes likely to have experienced unusual amounts of variation in recent human history.

    Conclusion: Recent theoretical advances in algorithms for phylogenetic tree reconstruction have made it possible to perform large-scale inferences of local maximum parsimony phylogenies from single nucleotide polymorphism (SNP) data. As results from the imperfection statistic demonstrate, phylogeny predictions encode substantial information useful for detecting genomic features and population history. This data set should serve as a platform for many kinds of inferences one may wish to make about human population history and genetic variation.
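
    The abstract does not spell out how 'imperfection' is computed; a natural reading is the number of excess mutations a maximum parsimony tree requires beyond one mutation per variant site, so that a perfect phylogeny scores zero. The sketch below is a minimal illustration under that assumption; the tree encoding and the `imperfection` helper are illustrative, not the paper's code.

```python
# Minimal sketch: 'imperfection' as excess mutations beyond one per variant
# site in a maximum parsimony tree. Assumption: the tree is given as a list
# of edges, each labeled with the variant sites that mutate on that edge.
from collections import Counter

def imperfection(edge_mutations):
    """edge_mutations: list of lists; edge_mutations[e] holds the site
    indices that change state along edge e of the phylogeny."""
    counts = Counter(site for edge in edge_mutations for site in edge)
    # A perfect phylogeny uses each site exactly once; every reuse adds 1.
    return sum(c - 1 for c in counts.values())

# Toy example: site 2 mutates on two different edges (one reuse).
print(imperfection([[0], [1, 2], [2]]))  # -> 1
```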

    Tracking hands in action for gesture-based computer input

    Get PDF
    This thesis introduces new methods for markerless tracking of the full articulated motion of hands and for informing the design of gesture-based computer input. Emerging devices such as smartwatches or virtual/augmented reality glasses need new input methods for interaction on the move. The highly dexterous human hands could provide an always-on input capability without the need to carry a physical device. First, we present novel methods to address the hard computer-vision-based hand tracking problem under varying numbers of cameras, viewpoints, and run-time requirements. Second, we contribute to the design of gesture-based interaction techniques by presenting heuristic and computational approaches. The contributions of this thesis allow users to interact effectively with computers through markerless tracking of hands and objects in desktop, mobile, and egocentric scenarios.

    Real-Time Hand Tracking Using a Sum of Anisotropic Gaussians Model

    Full text link
    Real-time markerless hand tracking is of increasing importance in human-computer interaction. Robust and accurate tracking of arbitrary hand motion is a challenging problem due to the many degrees of freedom, frequent self-occlusions, fast motions, and uniform skin color. In this paper, we propose a new approach that tracks the full skeleton motion of the hand from multiple RGB cameras in real time. The main contributions include a new generative tracking method that employs an implicit hand shape representation based on a Sum of Anisotropic Gaussians (SAG), and a pose-fitting energy that is smooth and analytically differentiable, making fast gradient-based pose optimization possible. This shape representation, together with a full perspective projection model, enables more accurate hand modeling than a related baseline method from the literature. Our method achieves better accuracy than previous methods and runs at 25 fps. We show these improvements both qualitatively and quantitatively on publicly available datasets. (Comment: 8 pages; accepted version of the paper published at 3DV 2014.)
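
    The computational idea behind the smooth, analytically differentiable energy is that the overlap integral of two Gaussians has a closed form, so a similarity between Gaussian sums is differentiable in the model parameters. The sketch below illustrates this for unnormalized isotropic 2D Gaussians only (the paper uses anisotropic Gaussians and a full perspective camera model); all names and the toy optimization are illustrative, not the authors' code.

```python
import numpy as np

def overlap(mu_p, s_p, mu_q, s_q):
    """Closed-form integral of the product of two unnormalized isotropic
    2D Gaussians exp(-|x - mu|^2 / (2 s^2))."""
    d2 = np.sum((mu_p - mu_q) ** 2)
    v = s_p**2 + s_q**2
    return 2.0 * np.pi * (s_p**2 * s_q**2 / v) * np.exp(-d2 / (2.0 * v))

def grad_mu(model_mu, model_s, img_mu, img_s):
    """Analytic gradient of the summed overlap w.r.t. model Gaussian means:
    d/dmu_p overlap = overlap * (mu_q - mu_p) / (s_p^2 + s_q^2)."""
    g = np.zeros_like(model_mu)
    for i, (mp, sp) in enumerate(zip(model_mu, model_s)):
        for mq, sq in zip(img_mu, img_s):
            g[i] += overlap(mp, sp, mq, sq) * (mq - mp) / (sp**2 + sq**2)
    return g

# Toy fit: gradient ascent pulls one model Gaussian onto one image Gaussian.
model_mu = np.array([[0.0, 0.0]]); model_s = np.array([1.0])
img_mu = np.array([[2.0, 1.0]]);   img_s = np.array([1.0])
for _ in range(200):
    model_mu += 0.5 * grad_mu(model_mu, model_s, img_mu, img_s)
print(model_mu)  # converges toward [2, 1]
```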

    New Frontiers in Chemical Energy and Environmental Engineering

    Get PDF
    (First paragraph) Energy is one of the major building blocks of modern society. Industry is widely regarded as the main source of environmental pollution. The problems associated with energy and the environment have now become a subject of international debate. Engineers play a vital role in devising techniques to mitigate environmental pollution and in developing sustainable energy technologies.

    GANerated Hands for Real-time 3D Hand Tracking from Monocular RGB

    Full text link
    We address the highly challenging problem of real-time 3D hand tracking from a monocular RGB-only sequence. Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. To train our CNN, we propose a novel approach for the synthetic generation of training data based on a geometrically consistent image-to-image translation network. Specifically, we use a neural network that translates synthetic images to "real" images, such that the generated images follow the same statistical distribution as real-world hand images. To train this translation network, we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state of the art on challenging RGB-only footage.
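
    The training objective for the translation network combines three terms. Assuming standard CycleGAN-style formulations (the exact loss forms and weights in the paper may differ, and every name and shape below is illustrative), a minimal sketch of the translator's combined loss might look like this:

```python
import torch
import torch.nn.functional as F

def translator_loss(d_fake_logits, synth, cycled_synth,
                    geo_pred_synth, geo_pred_translated,
                    lam_cyc=10.0, lam_geo=1.0):
    """Illustrative combined loss for a synthetic->real image translator:
    adversarial + cycle-consistency + geometric-consistency terms.
    d_fake_logits      : discriminator logits for translated images
    synth, cycled_synth: synthetic inputs and their reconstruction after
                         the synth->real->synth round trip
    geo_pred_*         : geometry (e.g. hand pose) predicted by an auxiliary
                         network before and after translation; this term
                         penalizes pose changes introduced by translation."""
    # Adversarial: translated images should be classified as real (label 1).
    adv = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    cyc = F.l1_loss(cycled_synth, synth)                   # cycle consistency
    geo = F.l1_loss(geo_pred_translated, geo_pred_synth)   # pose preserved
    return adv + lam_cyc * cyc + lam_geo * geo

# Toy shapes: batch of 4 images, 21 hand joints in 3D.
loss = translator_loss(torch.randn(4, 1),
                       torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64),
                       torch.randn(4, 21, 3), torch.randn(4, 21, 3))
print(float(loss))
```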

    Field trials with compounded feed developed by CMFRI for Penaeus indicus

    Get PDF
    Field trials were carried out with compounded feed at a shrimp farm adopted by CMFRI under its extension programme at South Chellanam, Cochin. The coconut-grove pond, with a water area of about 10 cents and a depth of about 1 metre, was stocked with 3,000 P. indicus seed.

    Direct maximum parsimony phylogeny reconstruction from genotype data

    Get PDF
    Abstract

    Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole-genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data; phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes.

    Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes.

    Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
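
    To make the genotype/haplotype conflation concrete: at a biallelic SNP, a genotype records only how many copies of the variant allele are present (0, 1, or 2), so every heterozygous site (1) is consistent with either phasing. The sketch below illustrates this conflation and enumerates the ordered haplotype pairs consistent with one genotype; it is an illustration of the data model, not the paper's algorithm, which searches over phasings jointly with the tree rather than committing to one up front.

```python
from itertools import product

def conflate(h1, h2):
    """Genotype from two haplotypes at biallelic SNPs: per-site allele count."""
    return [a + b for a, b in zip(h1, h2)]

def consistent_phasings(genotype):
    """All ordered (h1, h2) haplotype pairs that conflate to this genotype.
    Each heterozygous site (1) doubles the number of candidates: 2^k total."""
    het = [i for i, g in enumerate(genotype) if g == 1]
    phasings = []
    for bits in product([0, 1], repeat=len(het)):
        h1 = [g // 2 for g in genotype]  # homozygous sites: 0 -> 0, 2 -> 1
        h2 = list(h1)
        for i, b in zip(het, bits):      # heterozygous sites: split 1 as 0+1
            h1[i], h2[i] = b, 1 - b
        phasings.append((h1, h2))
    return phasings

print(conflate([0, 1, 1], [0, 0, 1]))  # -> [0, 1, 2]
print(consistent_phasings([0, 1, 2]))  # 2 phasings for one heterozygous site
```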

    Single-Shot Multi-Person 3D Pose Estimation From Monocular RGB

    Full text link
    We propose a new single-shot method for multi-person 3D pose estimation in general scenes from a monocular RGB camera. Our approach uses novel occlusion-robust pose-maps (ORPM) which enable full-body pose inference even under strong partial occlusions by other people and objects in the scene. ORPM outputs a fixed number of maps which encode the 3D joint locations of all people in the scene. Body part associations allow us to infer 3D pose for an arbitrary number of people without explicit bounding box prediction. To train our approach we introduce MuCo-3DHP, the first large-scale training data set showing real images of sophisticated multi-person interactions and occlusions. We synthesize a large corpus of multi-person images by compositing images of individual people (with ground truth from multi-view performance capture). We evaluate our method on our new challenging 3D-annotated multi-person test set MuPoTS-3D, where we achieve state-of-the-art performance. To further stimulate research in multi-person 3D pose estimation, we will make our new datasets and associated code publicly available for research purposes. (Comment: International Conference on 3D Vision (3DV), 2018.)
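
    The phrase "a fixed number of maps which encode the 3D joint locations" can be read as per-joint location maps from which a 3D coordinate is sampled at a 2D pixel position. The paper's actual readout scheme is more involved (it selects occlusion-robust readout pixels per person); the sketch below, including all array names and shapes, is an illustrative simplification only.

```python
import numpy as np

def read_pose(heatmaps, loc_maps):
    """Illustrative readout of one 3D pose from per-joint maps.
    heatmaps: (J, H, W) 2D joint-detection confidences.
    loc_maps: (3, J, H, W) x/y/z coordinate maps; the 3D position of
    joint j is read at the pixel where its heatmap peaks."""
    J = heatmaps.shape[0]
    pose = np.zeros((J, 3))
    for j in range(J):
        v, u = np.unravel_index(np.argmax(heatmaps[j]), heatmaps[j].shape)
        pose[j] = loc_maps[:, j, v, u]
    return pose

# Toy example with random maps for J=15 joints on a 64x64 grid.
rng = np.random.default_rng(0)
pose = read_pose(rng.random((15, 64, 64)),
                 rng.standard_normal((3, 15, 64, 64)))
print(pose.shape)  # (15, 3)
```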